Make moderation optional in llms.py, remove references to gpt-4-turbo-1106-preview as JSON mode is now generally available #255

Merged
merged 1 commit into main on May 8, 2024

Conversation

nonprofittechy
Member

@nonprofittechy nonprofittechy commented May 8, 2024

Fix #254

For speed and efficiency, automators may want to skip the moderation endpoint that the chat_completion function normally calls. All queries are still moderated by default, as a best practice to avoid the account being flagged, but if you know a query is safe, this change lets you skip the check.

Also removed some logging statements that could slow performance.
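The opt-out described above might look roughly like this. A minimal sketch only: the `skip_moderation` parameter name, the `moderate` helper, and the function bodies are hypothetical stand-ins, not the actual llms.py API.

```python
# Illustrative sketch of the optional-moderation pattern (not the real llms.py code).

def moderate(query: str) -> bool:
    """Stand-in for a call to a moderation endpoint.
    Returns True if the query should be flagged."""
    banned = {"unsafe"}  # placeholder rule for the sketch
    return any(word in query.lower() for word in banned)

def chat_completion(query: str, skip_moderation: bool = False) -> str:
    """Run a chat completion, moderating the query first unless the
    caller explicitly opts out (e.g., for trusted automated input)."""
    if not skip_moderation and moderate(query):
        raise ValueError("Query was flagged by the moderation endpoint")
    # ... model call would go here; stubbed out for the sketch ...
    return f"response to: {query}"

# Default path moderates; automations that know the input is safe can skip it.
chat_completion("summarize this intake form", skip_moderation=True)
```

Keeping moderation on by default means existing callers get the safe behavior unchanged, while batch automations can opt out per call.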

@nonprofittechy nonprofittechy merged commit 970fc17 into main May 8, 2024
4 checks passed
@nonprofittechy nonprofittechy deleted the llm-updates branch May 8, 2024 21:24
Development

Successfully merging this pull request may close these issues.

Make calling the moderation endpoint in llms.py optional (add a parameter)